Some or most of you have probably taken undergraduate- or graduate-level statistics courses. Unfortunately, the curricula for most introductory statistics courses focus on conducting statistical hypothesis tests as the primary means of inference: t-tests, chi-squared tests, analysis of variance, etc. Such tests seek to determine whether groups or effects are "statistically significant", a concept that is poorly understood, and hence often misused, by most practitioners. Even when interpreted correctly, statistical significance is a questionable goal for statistical inference, as it is of limited utility.
A far more powerful approach to statistical analysis involves building flexible models with the overarching aim of estimating quantities of interest. This section of the tutorial illustrates how to use Python to build statistical models of low to moderate difficulty from scratch, and use them to extract estimates and associated measures of uncertainty.
In [ ]:
import numpy as np
import pandas as pd
# Set some Pandas options
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 20)
pd.set_option('display.max_rows', 25)
A recurring statistical problem is finding estimates of the relevant parameters that correspond to the distribution that best represents our data.
In parametric inference, we specify a priori a suitable distribution, then choose the parameters that best fit the data.
In [ ]:
x = array([ 1.00201077, 1.58251956, 0.94515919, 6.48778002, 1.47764604,
5.18847071, 4.21988095, 2.85971522, 3.40044437, 3.74907745,
1.18065796, 3.74748775, 3.27328568, 3.19374927, 8.0726155 ,
0.90326139, 2.34460034, 2.14199217, 3.27446744, 3.58872357,
1.20611533, 2.16594393, 5.56610242, 4.66479977, 2.3573932 ])
_ = hist(x, bins=8)
We start with the problem of finding values for the parameters that provide the best fit between the model and the data, called point estimates. First, we need to define what we mean by ‘best fit’. There are two commonly used criteria: the method of moments, which chooses parameters so that the sample moments match the corresponding theoretical moments, and maximum likelihood, which chooses parameters that maximize the probability of the observed data under the model. Before applying either, here are a couple of example parametric distributions we might fit.
e.g. Poisson distribution
The Poisson distribution models unbounded counts:
$$Pr(Y=y|\lambda) = \frac{e^{-\lambda}\lambda^y}{y!}$$
e.g. normal distribution
The normal distribution models continuous, symmetric measurements:
$$f(x|\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$$
The dataset nashville_precip.txt contains NOAA precipitation data for Nashville measured since 1871. The gamma distribution is often a good fit to aggregated rainfall data, and will be our candidate distribution in this case.
In [ ]:
precip = pd.read_table("data/nashville_precip.txt", index_col=0, na_values='NA', delim_whitespace=True)
precip.head()
In [ ]:
_ = precip.hist(sharex=True, sharey=True, grid=False)
tight_layout()
The first step is recognizing what sort of distribution to fit our data to. A couple of observations: the monthly precipitation totals are positive-valued and continuous, and their distributions are skewed to the right.
There are a few possible choices, but one suitable candidate is the gamma distribution, written here with shape parameter $\alpha$ and scale parameter $\beta$:
$$f(x \mid \alpha, \beta) = \frac{x^{\alpha-1} e^{-x/\beta}}{\Gamma(\alpha)\beta^{\alpha}}$$
The method of moments simply assigns the empirical mean and variance to their theoretical counterparts, so that we can solve for the parameters.
So, for the gamma distribution, the mean and variance are:
$$\mu = \alpha\beta, \qquad \sigma^2 = \alpha\beta^2$$
So, if we solve for these parameters, we can use a gamma distribution to describe our data:
$$\hat{\alpha} = \frac{\bar{x}^2}{s^2}, \qquad \hat{\beta} = \frac{s^2}{\bar{x}}$$
Let's deal with the missing value in the October data. Given what we are trying to do, it is most sensible to fill in the missing value with the average of the available values.
In [ ]:
precip.fillna(value={'Oct': precip.Oct.mean()}, inplace=True)
Now, let's calculate the sample moments of interest, the means and variances by month:
In [ ]:
precip_mean = precip.mean()
precip_mean
In [ ]:
precip_var = precip.var()
precip_var
We then use these moments to estimate $\alpha$ and $\beta$ for each month:
In [ ]:
alpha_mom = precip_mean ** 2 / precip_var
beta_mom = precip_var / precip_mean
In [ ]:
alpha_mom, beta_mom
We can use the gamma.pdf function in scipy.stats.distributions to plot the distributions implied by the calculated alphas and betas. For example, here is January:
In [ ]:
from scipy.stats.distributions import gamma
hist(precip.Jan, normed=True, bins=20)
# beta is a scale parameter, so pass it as scale= (the third positional argument is loc)
plot(linspace(0, 10), gamma.pdf(linspace(0, 10), alpha_mom[0], scale=beta_mom[0]))
Looping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution:
In [ ]:
axs = precip.hist(normed=True, figsize=(12, 8), sharex=True, sharey=True, bins=15, grid=False)
for ax in axs.ravel():
# Get month
m = ax.get_title()
# Plot fitted distribution
x = linspace(*ax.get_xlim())
ax.plot(x, gamma.pdf(x, alpha_mom[m], scale=beta_mom[m]))
# Annotate with parameter estimates
label = 'alpha = {0:.2f}\nbeta = {1:.2f}'.format(alpha_mom[m], beta_mom[m])
ax.annotate(label, xy=(10, 0.2))
tight_layout()
Maximum likelihood (ML) fitting is usually more work than the method of moments, but it is preferred as the resulting estimator is known to have good theoretical properties.
There is a ton of theory regarding ML. We will restrict ourselves to the mechanics here.
Say we have some data $y = y_1,y_2,\ldots,y_n$ that is distributed according to some distribution:
Here, for example, is a Poisson distribution that describes the distribution of some discrete variables, typically counts:
In [ ]:
y = np.random.poisson(5, size=100)
plt.hist(y, bins=12, normed=True)
xlabel('y'); ylabel('Pr(y)')
The product $\prod_{i=1}^n Pr(y_i | \theta)$ gives us a measure of how likely it is to observe values $y_1,\ldots,y_n$ given the parameters $\theta$. Maximum likelihood fitting consists of choosing the appropriate function $l= Pr(Y|\theta)$ to maximize for a given set of observations. We call this function the likelihood function, because it is a measure of how likely the observations are if the model is true.
Given these data, how likely is this model?
In the above model, the data were drawn from a Poisson distribution with parameter $\lambda =5$.
$$L(y|\lambda=5) = \frac{e^{-5} 5^y}{y!}$$
So, for any given value of $y$, we can calculate its likelihood:
In [ ]:
poisson_like = lambda x, lam: np.exp(-lam) * (lam**x) / (np.arange(x)+1).prod()
lam = 6
value = 10
poisson_like(value, lam)
In [ ]:
# The joint likelihood of the data is the product of the individual likelihoods
np.prod([poisson_like(yi, lam) for yi in y])
In [ ]:
lam = 8
np.prod([poisson_like(yi, lam) for yi in y])
We can plot the likelihood function for any value of the parameter(s):
In [ ]:
lambdas = np.linspace(0,15)
x = 5
plt.plot(lambdas, [poisson_like(x, l) for l in lambdas])
xlabel('$\lambda$')
ylabel('L($\lambda$|x={0})'.format(x))
How is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) given the data, whereas the PDF returns the probability of data given a particular parameter value. Here is the PDF of the Poisson for $\lambda=5$.
In [ ]:
lam = 5
xvals = arange(15)
plt.bar(xvals, [poisson_like(x, lam) for x in xvals])
xlabel('x')
ylabel('Pr(X|$\lambda$=5)')
Why are we interested in the likelihood function?
A reasonable estimate of the true, unknown value for the parameter is one which maximizes the likelihood function. So, inference is reduced to an optimization problem.
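As a quick illustration with the simulated Poisson data above, we can check that the value of $\lambda$ maximizing the log-likelihood is close to the analytical MLE, which for the Poisson is just the sample mean. This is only a crude grid search (the grid lam_grid is an arbitrary choice for illustration), but it shows the idea:
In [ ]:
# Crude grid search: the lambda with the highest log-likelihood
# should be close to the analytical MLE, the sample mean of y
lam_grid = np.linspace(1, 15, 200)
loglike = [np.sum([np.log(poisson_like(yi, l)) for yi in y]) for l in lam_grid]
lam_grid[np.argmax(loglike)], y.mean()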
Going back to the rainfall data, if we are using a gamma distribution we need to maximize:
$$\begin{align}l(\alpha,\beta) &= \sum_{i=1}^n \log[\beta^{\alpha} x_i^{\alpha-1} e^{-\beta x_i}\Gamma(\alpha)^{-1}] \cr &= n[(\alpha-1)\overline{\log(x)} - \bar{x}\beta + \alpha\log(\beta) - \log\Gamma(\alpha)]\end{align}$$
(It's usually easier to work on the log scale. Note that here $\beta$ is a rate parameter, the reciprocal of the scale parameter used in the method of moments above.)
where $n = 2012 - 1871 = 141$ and the bar indicates an average over all $i$. We choose $\alpha$ and $\beta$ to maximize $l(\alpha,\beta)$.
Notice that $l$ is undefined (it diverges to negative infinity) if any $x$ is zero. We do not have any zeros, but we do have an NA value for one of the October observations, which we dealt with above.
To find the maximum of any function, we typically take the derivative with respect to the variable to be maximized, set it to zero and solve for that variable.
$$\frac{\partial l(\alpha,\beta)}{\partial \beta} = n\left(\frac{\alpha}{\beta} - \bar{x}\right) = 0$$
which can be solved as $\beta = \alpha/\bar{x}$. However, plugging this into the derivative with respect to $\alpha$ yields:
$$\frac{\partial l(\alpha,\beta)}{\partial \alpha} = \log(\alpha) + \overline{\log(x)} - \log(\bar{x}) - \frac{\Gamma'(\alpha)}{\Gamma(\alpha)} = 0$$
This has no closed form solution. We must use numerical optimization!
Numerical optimization algorithms take an initial "guess" at the solution, and iteratively improve the guess until it gets "close enough" to the answer.
Here, we will use the Newton-Raphson algorithm, which repeatedly updates the current guess $x_n$ using the function and its derivative:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
This algorithm is available to us via SciPy:
In [ ]:
from scipy.optimize import newton
Here is a graphical example of how Newton-Raphson converges on a solution, using an arbitrary function:
In [ ]:
# some function
func = lambda x: 3./(1 + 400*np.exp(-2*x)) - 1
xvals = np.linspace(0, 6)
plot(xvals, func(xvals))
text(5.3, 2.1, '$f(x)$', fontsize=16)
# zero line
plot([0,6], [0,0], 'k-')
# value at step n
plot([4,4], [0,func(4)], 'k:')
plt.text(4, -.2, '$x_n$', fontsize=16)
# tangent line
tanline = lambda x: -0.858 + 0.626*x
plot(xvals, tanline(xvals), 'r--')
# point at step n+1
xprime = 0.858/0.626
plot([xprime, xprime], [tanline(xprime), func(xprime)], 'k:')
plt.text(xprime+.1, -.2, '$x_{n+1}$', fontsize=16)
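To make the update rule concrete, here is a bare-bones, hand-rolled Newton-Raphson iteration applied to the same example function; this is only a minimal sketch (the helper newton_raphson and the analytic derivative func_prime are written here just for illustration), and scipy.optimize.newton does the same job with additional safeguards:
In [ ]:
# A minimal Newton-Raphson iteration, for illustration only
def newton_raphson(f, fprime, x0, tol=1e-9, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x
# Analytic derivative of the example function above
func_prime = lambda x: 2400 * np.exp(-2*x) / (1 + 400*np.exp(-2*x))**2
# Starting from x0 = 2, the iteration converges to the root near x = 2.65
newton_raphson(func, func_prime, 2)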
To apply the Newton-Raphson algorithm, we need functions that return the first and second derivatives of the log-likelihood with respect to the variable of interest ($\alpha$): the root of the first derivative is the MLE, and the second derivative supplies the slope information the algorithm needs. In our case, these are:
In [ ]:
from scipy.special import psi, polygamma
dlgamma = lambda m, log_mean, mean_log: np.log(m) - psi(m) - log_mean + mean_log
dl2gamma = lambda m, *args: 1./m - polygamma(1, m)
where log_mean and mean_log are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. psi (the digamma function) and polygamma are derivatives of the logarithm of the gamma function, which arise when we take the first and second derivatives of the log-likelihood.
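As a quick, purely illustrative sanity check, psi should agree with a numerical derivative of the log-gamma function (gammaln); the evaluation point $a=3$ below is arbitrary:
In [ ]:
from scipy.special import gammaln
# psi(a) is the derivative of log Gamma(a); check numerically at a = 3
eps = 1e-6
(gammaln(3 + eps) - gammaln(3 - eps)) / (2 * eps), psi(3)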
In [ ]:
# Calculate statistics
log_mean = precip.mean().apply(log)
mean_log = precip.apply(log).mean()
Time to optimize!
In [ ]:
# Alpha MLE for December
alpha_mle = newton(dlgamma, 2, dl2gamma, args=(log_mean[-1], mean_log[-1]))
alpha_mle
And now plug this back into the solution for beta:
In [ ]:
beta_mle = alpha_mle/precip.mean()[-1]
beta_mle
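The same optimization can be applied to every month in a loop. Here is a minimal sketch (the names alpha_mle_all and beta_mle_all are introduced just for illustration, and beta is again a rate parameter):
In [ ]:
# MLEs of alpha and beta (rate) for each month, via Newton-Raphson
alpha_mle_all = pd.Series({m: newton(dlgamma, 2, dl2gamma, args=(log_mean[m], mean_log[m]))
                           for m in precip.columns})
beta_mle_all = alpha_mle_all / precip.mean()
alpha_mle_all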
We can compare the fit of the estimates derived from MLE to those from the method of moments:
In [ ]:
dec = precip.Dec
dec.hist(normed=True, bins=10, grid=False)
x = linspace(0, dec.max())
plot(x, gamma.pdf(x, alpha_mom[-1], scale=beta_mom[-1]), 'm-')   # method of moments (beta_mom is a scale)
plot(x, gamma.pdf(x, alpha_mle, scale=1./beta_mle), 'r--')       # MLE (beta_mle is a rate, so scale = 1/beta)
For some common distributions, SciPy includes methods for fitting via MLE:
In [ ]:
from scipy.stats import gamma
gamma.fit(precip.Dec)
This fit is not directly comparable to our estimates, however, because SciPy's gamma.fit method fits a three-parameter version of the gamma distribution that also includes a location parameter, which it estimates freely by default.
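If a more directly comparable fit is desired, the location parameter can be held fixed at zero via the floc argument to fit, which reduces this to the familiar two-parameter gamma; the returned tuple is (shape, location, scale):
In [ ]:
# Fix the location at zero to recover the two-parameter gamma
gamma.fit(precip.Dec, floc=0)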
Suppose that we observe $Y$ truncated below at $a$ (where $a$ is known). If $X$ is the distribution of our observation, then:
$$ P(X \le x) = P(Y \le x|Y \gt a) = \frac{P(a \lt Y \le x)}{P(Y \gt a)}$$
(so, $Y$ is the original variable and $X$ is the truncated variable)
Then X has the density:
$$f_X(x) = \frac{f_Y(x)}{1-F_Y(a)} \quad \text{for } x \gt a$$
Suppose $Y \sim N(\mu, \sigma^2)$ and $x_1,\ldots,x_n$ are independent observations of $X$. We can use maximum likelihood to find $\mu$ and $\sigma$.
First, we can simulate a truncated distribution using a while statement to eliminate samples that are outside the support of the truncated distribution.
In [ ]:
x = np.random.normal(size=10000)
a = -1
x_small = x < a
while x_small.sum():
x[x_small] = np.random.normal(size=x_small.sum())
x_small = x < a
_ = hist(x, bins=100)
We can construct a log likelihood for this function using the conditional form:
$$f_X(x) = \frac{f_Y(x)}{1-F_Y(a)} \quad \text{for } x \gt a$$
In [ ]:
from scipy.stats.distributions import norm
trunc_norm = lambda theta, a, x: -(np.log(norm.pdf(x, theta[0], theta[1])) -
np.log(1 - norm.cdf(a, theta[0], theta[1]))).sum()
For this example, we will use another optimization algorithm, the Nelder-Mead simplex algorithm. It has a couple of advantages: it requires only function evaluations (no derivatives), and it can minimize over several parameters simultaneously (here, both $\mu$ and $\sigma$).
SciPy implements this algorithm in its fmin function:
In [ ]:
from scipy.optimize import fmin
fmin(trunc_norm, np.array([1,2]), args=(-1, x))
In general, simulating data is a terrific way of testing your model before using it with real data.
In some instances, we may not be interested in the parameters of a particular distribution of data, but just a smoothed representation of the data at hand. In this case, we can estimate the distribution non-parametrically (i.e. making no assumptions about the form of the underlying distribution) using kernel density estimation.
In [ ]:
# Some random data
y = np.random.random(15) * 10
y
In [ ]:
x = np.linspace(0, 10, 100)
# Smoothing parameter
s = 0.4
# Calculate the kernels
kernels = np.transpose([norm.pdf(x, yi, s) for yi in y])
plot(x, kernels, 'k:')
plot(x, kernels.sum(1))
plot(y, np.zeros(len(y)), 'ro', ms=10)
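Note that the sum of the kernels drawn above is not itself a proper density (it integrates to the number of data points rather than to one); dividing by the number of observations gives a valid density estimate:
In [ ]:
# Average of the kernels: a density estimate that integrates to one
plot(x, kernels.sum(1) / len(y), 'b-')
plot(y, np.zeros(len(y)), 'ro', ms=10)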
SciPy implements a Gaussian KDE that automatically chooses an appropriate bandwidth. Let's create a bi-modal distribution of data that is not easily summarized by a parametric distribution:
In [ ]:
# Create a bi-modal distribution with a mixture of Normals.
x1 = np.random.normal(0, 3, 50)
x2 = np.random.normal(4, 1, 50)
# Append by row
x = np.r_[x1, x2]
In [ ]:
plt.hist(x, bins=8, normed=True)
In [ ]:
from scipy.stats import kde
density = kde.gaussian_kde(x)
xgrid = np.linspace(x.min(), x.max(), 100)
plt.hist(x, bins=8, normed=True)
plt.plot(xgrid, density(xgrid), 'r-')
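The bandwidth is chosen automatically (Scott's rule by default), but if the estimate looks over- or under-smoothed it can be adjusted. The bw_method argument used in this sketch is available in newer SciPy releases:
In [ ]:
# A less-smoothed estimate using a smaller bandwidth factor
density_narrow = kde.gaussian_kde(x, bw_method=0.2)
plt.plot(xgrid, density(xgrid), 'r-')
plt.plot(xgrid, density_narrow(xgrid), 'b--')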
Recall the cervical dystonia database, which is a clinical trial of botulinum toxin type B (BotB) for patients with cervical dystonia from nine U.S. sites. The response variable is measurements on the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment). One way to check the efficacy of the treatment is to compare the distribution of TWSTRS for control and treatment patients at the end of the study.
Use the method of moments or MLE to calculate the mean and variance of TWSTRS at week 16 for one of the treatments and the control group. Assume that the distribution of the twstrs variable is normal:
In [ ]:
cdystonia = pd.read_csv("data/cdystonia.csv")
cdystonia[cdystonia.obs==6].hist(column='twstrs', by='treat', bins=8)
In [ ]:
A primary goal of many statistical data analysis tasks is to quantify the influence of one variable on another. For example, we may wish to know how different medical interventions influence the incidence or duration of disease, or perhaps how a baseball player's performance varies as a function of age.
In [ ]:
x = np.array([2.2, 4.3, 5.1, 5.8, 6.4, 8.0])
y = np.array([0.4, 10.1, 14.0, 10.9, 15.4, 18.5])
plot(x,y,'ro')
We can build a model to characterize the relationship between $X$ and $Y$, recognizing that additional factors other than $X$ (the ones we have measured or are interested in) may influence the response variable $Y$.
$$y_i = f(x_i) + \epsilon_i$$
where $f$ is some function, for example a linear function:
$$y_i = \beta_0 + \beta_1 x_i + \epsilon_i$$
and $\epsilon_i$ accounts for the difference between the observed response $y_i$ and its prediction from the model $\hat{y_i} = \beta_0 + \beta_1 x_i$. This is sometimes referred to as process uncertainty.
We would like to select $\beta_0, \beta_1$ so that the difference between the predictions and the observations is zero, but this is not usually possible. Instead, we choose a reasonable criterion: the smallest sum of the squared differences between $\hat{y}$ and $y$.
Squaring serves two purposes: (1) to prevent positive and negative values from cancelling each other out and (2) to strongly penalize large deviations. Whether the latter is a good thing or not depends on the goals of the analysis.
In other words, we will select the parameters that minimize the squared error of the model.
In [ ]:
ss = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x) ** 2)
In [ ]:
ss([0,1],x,y)
In [ ]:
b0,b1 = fmin(ss, [0,1], args=(x,y))
b0,b1
In [ ]:
plot(x, y, 'ro')
plot([0,10], [b0, b0+b1*10])
In [ ]:
plot(x, y, 'ro')
plot([0,10], [b0, b0+b1*10])
for xi, yi in zip(x,y):
plot([xi]*2, [yi, b0+b1*xi], 'k:')
xlim(2, 9); ylim(0, 20)
Minimizing the sum of squares is not the only criterion we can use; it is just a very popular (and successful) one. For example, we can try to minimize the sum of absolute differences:
In [ ]:
sabs = lambda theta, x, y: np.sum(np.abs(y - theta[0] - theta[1]*x))
b0,b1 = fmin(sabs, [0,1], args=(x,y))
print b0,b1
plot(x, y, 'ro')
plot([0,10], [b0, b0+b1*10])
We are not restricted to a straight-line regression model; we can represent a curved relationship between our variables by introducing polynomial terms. For example, a quadratic model:
$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$$
In [ ]:
ss2 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2)) ** 2)
b0,b1,b2 = fmin(ss2, [1,1,-1], args=(x,y))
print b0,b1,b2
plot(x, y, 'ro')
xvals = np.linspace(0, 10, 100)
plot(xvals, b0 + b1*xvals + b2*(xvals**2))
Although a polynomial model characterizes a nonlinear relationship, it is a linear problem in terms of estimation. That is, the regression model $f(y | x)$ is linear in the parameters.
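Because the model is linear in its parameters, we do not actually need an iterative optimizer for this problem. For example, NumPy's polyfit solves the same least squares problem directly (coefficients are returned from highest order to lowest):
In [ ]:
# Direct least squares fit of a second-order polynomial to the same data
np.polyfit(x, y, 2)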
For some data, it may be reasonable to consider polynomials of order>2. For example, consider the relationship between the number of home runs a baseball player hits and the number of runs batted in (RBI) they accumulate; clearly, the relationship is positive, but we may not expect a linear relationship.
In [ ]:
ss3 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2) - theta[3]*(x**3)) ** 2)
bb = pd.read_csv("data/baseball.csv", index_col=0)
plot(bb.hr, bb.rbi, 'r.')
b0,b1,b2,b3 = fmin(ss3, [0,1,-1,0], args=(bb.hr, bb.rbi))
xvals = arange(40)
plot(xvals, b0 + b1*xvals + b2*(xvals**2) + b3*(xvals**3))
Of course, we need not fit least squares models by hand. The statsmodels package implements least squares models that allow for model fitting in a single line:
In [ ]:
import statsmodels.api as sm
straight_line = sm.OLS(y, sm.add_constant(x)).fit()
straight_line.summary()
In [ ]:
from statsmodels.formula.api import ols as OLS
data = pd.DataFrame(dict(x=x, y=y))
quadratic_fit = OLS('y ~ x + I(x**2)', data).fit()
quadratic_fit.summary()
How do we choose among competing models for a given dataset? In general, adding parameters will always improve the fit to the data at hand; for example, here is a ninth-order polynomial fit to the same six points:
In [ ]:
def calc_poly(params, data):
x = np.c_[[data**i for i in range(len(params))]]
return np.dot(params, x)
ssp = lambda theta, x, y: np.sum((y - calc_poly(theta, x)) ** 2)
betas = fmin(ssp, np.zeros(10), args=(x,y), maxiter=1e6)
plot(x, y, 'ro')
xvals = np.linspace(0, max(x), 100)
plot(xvals, calc_poly(betas, xvals))
One approach is to use an information-theoretic criterion to select the most appropriate model. For example Akaike's Information Criterion (AIC) balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. We can easily calculate AIC as:
$$AIC = n \log(\hat{\sigma}^2) + 2p$$
where $p$ is the number of parameters in the model and $\hat{\sigma}^2 = RSS/(n-p-1)$.
Notice that as the number of parameters increases, the residual sum of squares goes down, but the second term (a penalty) increases.
To apply AIC to model selection, we choose the model that has the lowest AIC value.
In [ ]:
n = len(x)
aic = lambda rss, p, n: n * np.log(rss/(n-p-1)) + 2*p
RSS1 = ss(fmin(ss, [0,1], args=(x,y)), x, y)
RSS2 = ss2(fmin(ss2, [1,1,-1], args=(x,y)), x, y)
print aic(RSS1, 2, n), aic(RSS2, 3, n)
Hence, we would select the 2-parameter (linear) model.
Fitting a line to the relationship between two variables using the least squares approach is sensible when the variable we are trying to predict is continuous, but what about when the data are dichotomous?
Let's consider the problem of predicting survival in the Titanic disaster, based on our available information. For example, let's say that we want to predict survival as a function of the fare paid for the journey.
In [ ]:
titanic = pd.read_excel("data/titanic.xls", "titanic")
titanic.name
In [ ]:
jitter = np.random.normal(scale=0.02, size=len(titanic))
plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3)
yticks([0,1])
ylabel("survived")
xlabel("log(fare)")
I have added random jitter on the y-axis to help visualize the density of the points, and have plotted fare on the log scale.
Clearly, fitting a line through this data makes little sense, for several reasons. First, for most values of the predictor variable, the line would predict values that are not zero or one. Second, it would seem odd to choose least squares (or similar) as a criterion for selecting the best line.
In [ ]:
x = np.log(titanic.fare[titanic.fare>0])
y = titanic.survived[titanic.fare>0]
betas_titanic = fmin(ss, [1,1], args=(x,y))
In [ ]:
jitter = np.random.normal(scale=0.02, size=len(titanic))
plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3)
yticks([0,1])
ylabel("survived")
xlabel("log(fare)")
plt.plot([0,7], [betas_titanic[0], betas_titanic[0] + betas_titanic[1]*7.])
If we look at this data, we can see that for most values of fare, there are some individuals that survived and some that did not. However, notice that the cloud of points is denser on the "survived" (y=1) side for larger values of fare than on the "died" (y=0) side.
Rather than model the binary outcome explicitly, it makes sense instead to model the probability of death or survival in a stochastic model. Probabilities are measured on a continuous [0,1] scale, which may be more amenable for prediction using a regression line. We need to consider a different probability model for this exercise, however; let's consider the Bernoulli distribution as a generative model for our data:
$$f(y|p) = p^y (1-p)^{1-y}$$
where $y = \{0,1\}$ and $p \in [0,1]$. So, this model predicts whether $y$ is zero or one as a function of the probability $p$. Notice that when $y=1$, the $1-p$ term disappears, and when $y=0$, the $p$ term disappears.
So, the model we want to fit should look something like this:
$$p_i = \beta_0 + \beta_1 x_i + \epsilon_i$$
However, since $p$ is constrained to be between zero and one, it is easy to see where a linear (or polynomial) model might predict values outside of this range. We can modify this model slightly by using a link function to transform the probability to have an unbounded range on a new scale. Specifically, we can use a logit transformation as our link function:
$$\text{logit}(p) = \log\left[\frac{p}{1-p}\right]$$
Here's a plot of $p/(1-p)$
In [ ]:
logit = lambda p: np.log(p/(1.-p))
unit_interval = np.linspace(0,1)
plt.plot(unit_interval/(1-unit_interval), unit_interval)
And here's the logit function:
In [ ]:
plt.plot(logit(unit_interval), unit_interval)
The inverse of the logit transformation is:
$$p = \frac{1}{1 + \exp(-x)}$$
So, now our model is:
$$\text{logit}(p_i) = \beta_0 + \beta_1 x_i$$
We can fit this model using maximum likelihood. Our likelihood, again based on the Bernoulli model, is:
$$L(y|p) = \prod_{i=1}^n p_i^{y_i} (1-p_i)^{1-y_i}$$
which, on the log scale is:
$$l(y|p) = \sum_{i=1}^n y_i \log(p_i) + (1-y_i)\log(1-p_i)$$
We can easily implement this in Python, keeping in mind that fmin minimizes, rather than maximizes, functions:
In [ ]:
invlogit = lambda x: 1. / (1 + np.exp(-x))
def logistic_like(theta, x, y):
p = invlogit(theta[0] + theta[1] * x)
# Return negative of log-likelihood
return -np.sum(y * np.log(p) + (1-y) * np.log(1 - p))
Remove null values from variables
In [ ]:
x, y = titanic[titanic.fare.notnull()][['fare', 'survived']].values.T
... and fit the model.
In [ ]:
b0,b1 = fmin(logistic_like, [0.5,0], args=(x,y))
b0, b1
In [ ]:
jitter = np.random.normal(scale=0.01, size=len(x))
plot(x, y+jitter, 'r.', alpha=0.3)
yticks([0,.25,.5,.75,1])
xvals = np.linspace(0, 600)
plot(xvals, invlogit(b0+b1*xvals))
As with our least squares model, we can easily fit logistic regression models in statsmodels, in this case using the GLM (generalized linear model) class with a binomial error distribution specified.
In [ ]:
logistic = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
logistic.summary()
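The fitted coefficients are available in the results' params attribute, and should agree (up to optimization tolerance) with the estimates we obtained by minimizing the negative log-likelihood with fmin above:
In [ ]:
# Compare the GLM coefficients with the fmin-based estimates
logistic.params, (b0, b1)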
In [ ]:
Parametric inference can be non-robust: the chosen distribution may describe the data poorly, and outliers can strongly influence the estimates.
Parametric inference can be difficult: the sampling distribution of a statistic may be hard or impossible to derive analytically.
An alternative is to estimate the sampling distribution of a statistic empirically without making assumptions about the form of the population.
We have seen this already with the kernel density estimate.
The bootstrap is a resampling method introduced by Bradley Efron that allows one to approximate the sampling distribution of a statistic empirically, and thereby obtain estimates of the mean and variance of that distribution.
Bootstrap sample:
$$S_i^* = \{x_{i1}^*, x_{i2}^*, \ldots, x_{in}^*\}$$
$S_i^*$ is a sample of size $n$, drawn from the original sample with replacement.
In Python, we have already seen the NumPy function permutation that can be used in conjunction with Pandas' take method to generate a random sample of some data without replacement:
In [ ]:
np.random.permutation(titanic.name)[:5]
Similarly, we can use the random.randint method to generate a sample with replacement, which we can use when bootstrapping.
In [ ]:
random_ind = np.random.randint(0, len(titanic), 5)
titanic.name[random_ind]
We regard S as an "estimate" of population P
population : sample :: sample : bootstrap sample
The idea is to generate replicate bootstrap samples:
$$S^* = \{S_1^*, S_2^*, \ldots, S_R^*\}$$
Compute the statistic $t$ (estimate) for each bootstrap sample:
$$T_i^* = t(S_i^*)$$
In [ ]:
n = 10
R = 1000
# Original sample (n=10)
x = np.random.normal(size=n)
# 1000 bootstrap samples of size 10
s = [x[np.random.randint(0,n,n)].mean() for i in range(R)]
_ = hist(s, bins=30)
In [ ]:
boot_mean = np.sum(s)/R
boot_mean
In [ ]:
boot_var = ((np.array(s) - boot_mean) ** 2).sum() / (R-1)
boot_var
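The square root of this variance is the bootstrap estimate of the standard error of the sample mean; for a standard normal sample of size 10 it should be roughly $1/\sqrt{10} \approx 0.32$:
In [ ]:
# Bootstrap standard error of the mean
np.sqrt(boot_var)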
Since we have estimated the expectation of the bootstrapped statistics, we can estimate the bias of T:
$$\hat{B}^* = \bar{T}^* - T$$
In [ ]:
boot_mean - np.mean(x)
An attractive feature of bootstrap statistics is the ease with which you can obtain an estimate of uncertainty for a given statistic. We simply use the empirical quantiles of the bootstrapped statistics to obtain percentiles corresponding to a confidence interval of interest.
This employs the ordered bootstrap replicates:
$$T_{(1)}^*, T_{(2)}^*, \ldots, T_{(R)}^*$$
Simply extract the $100(\alpha/2)$ and $100(1-\alpha/2)$ percentiles:
$$T_{[(R+1)\alpha/2]}^* \lt \theta \lt T_{[(R+1)(1-\alpha/2)]}^*$$
In [ ]:
s_sorted = np.sort(s)
s_sorted[:10]
In [ ]:
s_sorted[-10:]
In [ ]:
alpha = 0.05
s_sorted[[int((R+1)*alpha/2), int((R+1)*(1-alpha/2))]]
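Equivalently, NumPy's percentile function computes essentially the same interval directly from the (unsorted) bootstrap replicates:
In [ ]:
# 95% percentile bootstrap interval
np.percentile(s, 100 * alpha / 2), np.percentile(s, 100 * (1 - alpha / 2))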
In [ ]: